Search for: All records

Creators/Authors contains: "Hassani, H"


  1. An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline and then adapt it to downstream tasks, often by re-training only the last linear layer of the network. This approach yields strong downstream performance in a variety of contexts, demonstrating that multitask pretraining leads to effective feature learning. Although several recent theoretical studies have shown that shallow NNs learn meaningful features when either (i) they are trained on a single task or (ii) they are linear, very little is known about the closer-to-practice case of nonlinear NNs trained on multiple tasks. In this work, we present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks. Our key insight is that multitask pretraining induces a pseudo-contrastive loss that favors representations aligning points that typically share the same label across tasks. Using this observation, we show that when the tasks are binary classification problems whose labels depend on the projection of the data onto an r-dimensional subspace of the d-dimensional input space (with d ≫ r), a simple gradient-based multitask learning algorithm on a two-layer ReLU NN recovers this projection, allowing generalization to downstream tasks with sample and neuron complexity independent of d. In contrast, we show that, with high probability over the draw of a single task, training on that task alone is not guaranteed to learn all r ground-truth features.
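The pretraining setup this abstract describes (a shared first layer with per-task linear heads, labels determined by a projection onto a low-dimensional subspace) can be sketched in a few lines of numpy. This is a minimal illustration under assumed choices, not the paper's algorithm: the subspace basis `U`, the per-task label rule (sign of a random linear function of the projected input), and the squared loss are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k, m, n = 20, 2, 6, 32, 400   # input dim, subspace dim, neurons, tasks, samples/task

# Hypothetical ground truth: an r-dimensional subspace U of R^d; each task labels
# a point by the sign of a random linear function of its projection onto U.
U = np.linalg.qr(rng.normal(size=(d, r)))[0]          # d x r orthonormal basis
tasks = []
for _ in range(m):
    w = rng.normal(size=r)
    X = rng.normal(size=(n, d))
    y = np.sign(X @ U @ w)
    tasks.append((X, y))

# Two-layer ReLU net: shared first layer W (k x d), one linear head A[t] per task.
W = rng.normal(size=(k, d)) / np.sqrt(d)
A = rng.normal(size=(m, k)) / np.sqrt(k)

def loss(W, A):
    return np.mean([np.mean((np.maximum(tasks[t][0] @ W.T, 0) @ A[t] - tasks[t][1]) ** 2)
                    for t in range(m)])

lr, loss0 = 0.05, loss(W, A)
for _ in range(200):
    gW = np.zeros_like(W)
    for t, (X, y) in enumerate(tasks):
        H = np.maximum(X @ W.T, 0)                    # hidden activations, n x k
        err = H @ A[t] - y                            # squared-loss residual
        A[t] -= lr * (H.T @ err) / n                  # per-task head update
        # gradient of the loss w.r.t. the shared layer W (ReLU mask times head weights)
        gW += ((err[:, None] * (X @ W.T > 0)) * A[t]).T @ X / n
    W -= lr * gW / m                                  # shared-feature update
print(loss0, loss(W, A))                              # expect the multitask loss to drop
```

Re-training only a fresh head on a new task (holding `W` fixed) is then an ordinary linear regression in the k-dimensional feature space, which is how the last-layer adaptation described above operates.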
  2. We initiate the study of federated reinforcement learning under environmental heterogeneity by considering a policy evaluation problem. Our setup involves agents interacting with environments that share the same state and action space but differ in their reward functions and state transition kernels. Assuming agents can communicate via a central server, we ask: Does exchanging information expedite the process of evaluating a common policy? To answer this question, we provide the first comprehensive finite-time analysis of a federated temporal difference (TD) learning algorithm with linear function approximation, while accounting for Markovian sampling, heterogeneity in the agents' environments, and multiple local updates to save communication. Our analysis crucially relies on several novel ingredients: (i) deriving perturbation bounds on TD fixed points as a function of the heterogeneity in the agents' underlying Markov decision processes (MDPs); (ii) introducing a virtual MDP to closely approximate the dynamics of the federated TD algorithm; and (iii) using the virtual MDP to make explicit connections to federated optimization. Putting these pieces together, we rigorously prove that in a low-heterogeneity regime, exchanging model estimates leads to linear convergence speedups in the number of agents. 
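The federated policy-evaluation loop in this abstract (heterogeneous agents, multiple local TD updates, periodic averaging at a server) can be sketched concretely. This is a minimal illustration under assumed parameters, not the paper's analyzed algorithm: the state/feature sizes, the way heterogeneity is injected (a convex mixture of a base kernel with a random one), and the step size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
S, p, N, K, rounds = 8, 3, 5, 10, 50   # states, features, agents, local steps, comm rounds
gamma, alpha = 0.9, 0.05               # discount, TD step size

Phi = rng.normal(size=(S, p))          # shared linear features phi(s)

def random_kernel():
    P = rng.random((S, S))
    return P / P.sum(axis=1, keepdims=True)

# Hypothetical heterogeneous environments: each agent's transition kernel and
# reward are small perturbations of a common base (a low-heterogeneity regime).
base_P, base_r = random_kernel(), rng.random(S)
envs = []
for _ in range(N):
    P = 0.9 * base_P + 0.1 * random_kernel()   # rows still sum to 1
    r = base_r + 0.1 * rng.normal(size=S)
    envs.append((P, r))

theta = np.zeros(p)                    # server model
for _ in range(rounds):
    local = []
    for P, r in envs:
        th, s = theta.copy(), rng.integers(S)
        for _ in range(K):             # K local TD(0) updates with Markovian sampling
            s2 = rng.choice(S, p=P[s])
            delta = r[s] + gamma * Phi[s2] @ th - Phi[s] @ th
            th += alpha * delta * Phi[s]
            s = s2
        local.append(th)
    theta = np.mean(local, axis=0)     # server averages the N local models
```

The speedup question the abstract answers is whether averaging the `N` local estimates here converges faster than any single agent running TD(0) alone; the paper proves a linear speedup in `N` when the kernels and rewards are close.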
  3. The complex physical, kinematic, and chemical properties of galaxy centres make them interesting environments to examine with molecular line emission. We present new 2−4″ (∼75−150 pc at 7.7 Mpc) observations at 2 and 3 mm covering the central 50″ (∼1.9 kpc) of the nearby double-barred spiral galaxy NGC 6946, obtained with the IRAM Plateau de Bure Interferometer. We detect spectral lines from ten molecules: CO, HCN, HCO+, HNC, CS, HC3N, N2H+, C2H, CH3OH, and H2CO. We complement these with published 1 mm CO observations and 33 GHz continuum observations to explore the star formation rate surface density Σ_SFR on 150 pc scales. In this paper, we analyse regions associated with the inner bar of NGC 6946, namely the nuclear region (NUC) and the northern (NBE) and southern (SBE) inner bar ends, and we focus on short-spacing-corrected bulk (CO) and dense gas tracers (HCN, HCO+, and HNC). We find that HCO+ correlates best with Σ_SFR, but fits to the dense gas fraction (f_dense) and the star formation efficiency of the dense gas (SFE_dense) show different behaviour than expected from large-scale disc observations. The SBE has a higher Σ_SFR, f_dense, and shocked gas fraction than the NBE. We examine line-ratio diagnostics and find a higher CO(2−1)/CO(1−0) ratio towards the NBE than towards the NUC. Moreover, comparison with existing extragalactic datasets suggests that the HCN/HNC ratio is not suitable for probing kinetic temperatures on kiloparsec and sub-kiloparsec scales in extragalactic regions. Lastly, our study shows that the HCO+/HCN ratio might not be a unique indicator for diagnosing AGN activity in galaxies.
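The quantities f_dense and SFE_dense used above are simple ratios once line intensities are converted to surface densities. The sketch below uses entirely hypothetical intensities and standard assumed conversion factors (alpha_CO, alpha_HCN), not values from this paper, just to make the definitions concrete.

```python
import numpy as np

# Hypothetical integrated intensities (K km/s) for three apertures; these numbers
# are illustrative, not measurements from the NGC 6946 study above.
I_co  = np.array([120.0, 80.0, 45.0])     # CO(1-0), bulk molecular gas tracer
I_hcn = np.array([6.0, 2.4, 0.9])         # HCN(1-0), dense gas tracer
sigma_sfr = np.array([0.30, 0.12, 0.05])  # SFR surface density, Msun / yr / kpc^2

# Assumed conversion factors, Msun / (K km/s pc^2); widely used fiducial values.
alpha_co, alpha_hcn = 4.35, 10.0
sigma_mol   = alpha_co  * I_co            # total molecular gas surface density
sigma_dense = alpha_hcn * I_hcn           # dense gas surface density

f_dense   = sigma_dense / sigma_mol       # dense gas fraction (dimensionless)
sfe_dense = sigma_sfr / sigma_dense       # SFR per unit dense gas mass
print(f_dense, sfe_dense)
```

The abstract's finding is that regressions of f_dense and SFE_dense against environment on these 150 pc scales behave differently from the trends seen in kiloparsec-scale disc surveys.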